A central challenge in data-driven model discovery is the presence of hidden, or latent, variables that are not directly measured but are dynamically important. Takens' theorem provides conditions under which these partial measurements can be augmented with time-delayed information, resulting in an attractor that is diffeomorphic to that of the original full-state system. However, the coordinate transformation back to the original attractor is typically unknown, and learning the dynamics in the embedding space has remained an open challenge for decades. Here, we design a custom deep autoencoder network to learn a coordinate transformation from the delay-embedded space into a new space in which the dynamics can be represented in a sparse, closed form. We demonstrate this approach on the Lorenz, Rössler, and Lotka-Volterra systems, learning the dynamics from a single measurement variable. As a challenging example, we learn a Lorenz analogue from a single scalar variable extracted from a video of a chaotic waterwheel. The resulting modeling framework combines deep learning, to uncover effective coordinates, with the sparse identification of nonlinear dynamics (SINDy), for interpretable modeling. Thus, we show that it is possible to simultaneously learn a closed-form model and the associated coordinate system for partially observed dynamics.
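As a rough illustration of the delay-embedding step that Takens' theorem licenses, a single measured variable can be lifted into a higher-dimensional space of time-shifted copies. This is a minimal sketch, not the paper's implementation; the embedding dimension `d` and delay `tau` are illustrative choices that would in practice be tuned to the system.

```python
import numpy as np

def delay_embed(x, d, tau):
    """Build a Takens time-delay embedding of a scalar time series.

    Row i of the result is [x[i], x[i + tau], ..., x[i + (d-1)*tau]],
    so each row is one point in the d-dimensional embedded space.
    """
    n = len(x) - (d - 1) * tau  # number of complete delay vectors
    return np.column_stack([x[k * tau : k * tau + n] for k in range(d)])

# Embed a toy scalar series with dimension 3 and delay 2.
H = delay_embed(np.arange(10.0), d=3, tau=2)
```

A deep autoencoder, as in the abstract, would then map rows of `H` to coordinates where the dynamics admit a sparse closed form.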
Automated data-driven modeling, the process of directly discovering the governing equations of a system from data, is increasingly being used across the scientific community. PySINDy is a Python package that provides tools for applying the sparse identification of nonlinear dynamics (SINDy) approach to data-driven model discovery. In this major update to PySINDy, we implement several advanced features that enable the discovery of more general differential equations from noisy and limited data. The library of candidate terms is extended for the identification of actuated systems, partial differential equations (PDEs), and implicit differential equations. Robust formulations, including the integral form of SINDy and ensembling techniques, are also implemented to improve performance on real-world data. Finally, we provide a range of new optimization algorithms, including several sparse regression techniques and algorithms to enforce and promote inequality constraints and stability. Together, these updates enable entirely new SINDy model discovery capabilities that have not yet been reported in the literature, such as constrained PDE identification and ensembling with different sparse regression optimizers.
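The sparse regression at the heart of SINDy can be sketched in plain NumPy as sequentially thresholded least squares. This is an illustrative re-implementation of the idea, not PySINDy's actual API; `theta` stands for a library of candidate terms evaluated on the data, and the threshold value is an assumption.

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: solve theta @ xi ~= dxdt,
    repeatedly zeroing coefficients below `threshold` and refitting on
    the surviving candidate terms, yielding a sparse coefficient matrix."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for j in range(dxdt.shape[1]):           # refit each equation
            big = ~small[:, j]
            if big.any():
                xi[big, j] = np.linalg.lstsq(
                    theta[:, big], dxdt[:, j], rcond=None)[0]
    return xi

# Recover dx/dt = 0.5 + 2x from a library [1, x, x^2]: the x^2 term
# should be thresholded away.
x = np.linspace(-1.0, 1.0, 50)
theta = np.column_stack([np.ones_like(x), x, x**2])
xi = stlsq(theta, (0.5 + 2.0 * x)[:, None])
```

PySINDy wraps this kind of optimizer behind a scikit-learn-style `fit` interface, together with the library construction and differentiation steps.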
Given the success of in-context learning with large pre-trained language models, we introduce in-context learning distillation to transfer in-context few-shot learning ability from large models to smaller models. We propose combining in-context learning objectives with language modeling objectives to distill both the ability to read in-context examples and task knowledge into the smaller models. We perform in-context learning distillation under two different few-shot learning paradigms: Meta In-context Tuning (Meta-ICT) and Multitask In-context Tuning (Multitask-ICT). Multitask-ICT performs better on multitask few-shot learning but also requires more computation than Meta-ICT. Our method shows consistent improvements for both Meta-ICT and Multitask-ICT on two benchmarks: LAMA and CrossFit. Our extensive experiments and analysis reveal that in-context learning objectives and language modeling objectives are complementary under the Multitask-ICT paradigm: in-context learning objectives achieve the best performance when combined with language modeling objectives.
Identifying the spurious correlations learned by a trained model is central to refining that model and building a trustworthy one. We present a simple method to identify spurious correlations learned by a model trained for image classification. We apply image-level perturbations and monitor changes in the certainty of predictions made by the trained model. We demonstrate this approach on an image classification dataset containing images with synthetically generated spurious regions and show that the trained model was over-dependent on those regions. Moreover, we remove the learned spurious correlations with an explanation-based learning approach.
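The perturb-and-monitor idea can be sketched as follows. This is a hypothetical minimal version, not the paper's code: `model` stands in for any classifier returning a scalar confidence, and the rectangular `region` and zero fill are assumptions.

```python
import numpy as np

def confidence_drop(model, image, region, fill=0.0):
    """Measure how much a model's confidence changes when one image
    region is occluded. A large drop for a region that should be
    irrelevant to the label hints at a learned spurious correlation."""
    r0, r1, c0, c1 = region
    perturbed = image.copy()
    perturbed[r0:r1, c0:c1] = fill   # image-level perturbation
    return model(image) - model(perturbed)

# Toy model that is over-reliant on the top-left corner of the image.
toy_model = lambda im: float(im[:2, :2].mean())
img = np.ones((4, 4))
drop = confidence_drop(toy_model, img, (0, 2, 0, 2))  # occlude the corner
```

Scanning `region` over the image and ranking the drops would surface the areas the model depends on most.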
This paper introduces the shared task of summarizing documents in several creative domains, namely literary texts, movie scripts, and television scripts. Summarizing these creative documents requires making complex literary interpretations, as well as understanding non-trivial temporal dependencies in texts containing varied styles of plot development and narrative structure. This poses unique challenges that remain underexplored by text summarization systems. In this shared task, we introduce four sub-tasks and their corresponding datasets, focusing on summarizing books, movie scripts, primetime television scripts, and daytime soap opera scripts. We detail the process of curating these datasets for the task, as well as the metrics used to evaluate the submissions. As part of the CREATIVESUMM workshop at COLING 2022, the shared task attracted 18 submissions in total. We discuss the submissions and the baselines for each sub-task in this paper, along with directions for facilitating future work in the field.
Summarizing novel chapters is a difficult task due to the input length and the fact that sentences in the desired summaries draw content from multiple places throughout the chapter. We present a pipelined extractive-abstractive approach in which the extractive step filters the content passed to the abstractive component. The extremely lengthy input also yields a dataset highly skewed toward negative instances for extractive summarization; we therefore adopt a margin ranking loss for extraction to encourage separation between positive and negative examples. Our extraction component operates at the constituent level; our approach enriches the text with spinal tree information, which provides syntactic context (in the form of constituents) to the extraction model. We show an improvement of 3.71 Rouge-1 points over the best results reported in prior work on an existing novel-chapter dataset.
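A margin ranking loss for extraction can be sketched generically as a hinge over score differences. This is a standard formulation shown for illustration, not necessarily the paper's exact loss; the scores and margin are hypothetical.

```python
import numpy as np

def margin_ranking_loss(pos_scores, neg_scores, margin=1.0):
    """Hinge-style margin ranking loss over all positive/negative pairs:
    a negative example incurs loss whenever its score comes within
    `margin` of a positive example's score, pushing the two apart."""
    diffs = margin - (pos_scores[:, None] - neg_scores[None, :])
    return np.maximum(0.0, diffs).mean()

# A positive constituent scored well above a negative one incurs no loss;
# one scored only slightly above still does.
loss = margin_ranking_loss(np.array([0.5]), np.array([0.0]))
```

Unlike plain cross-entropy, this objective only cares about relative ordering, which helps when negatives vastly outnumber positives.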
Using only depth information, this paper introduces a novel and efficient local planning approach that improves both computational efficiency and planning performance for memoryless local planners. We first propose sampling based on the depth data, which can identify and eliminate a specific type of in-collision trajectory from the sampled motion-primitive library. More specifically, all occluded primitive endpoints are found by querying the depth values and are excluded from the sampled set, which significantly reduces the computational workload required for collision checking. We furthermore propose a steering mechanism, also based on the depth information, that effectively prevents an autonomous vehicle from getting stuck when facing a large convex obstacle, providing a higher level of autonomy for the planning system. Our steering technique is proven to be complete in scenarios with convex obstacles. To evaluate the effectiveness of the proposed DEpth-based Sampling and Steering (DESS) methods, we implemented them in synthetic environments in which a simulated quadrotor flew through a cluttered region with multiple obstacles of different sizes. The results demonstrate that the proposed approach considerably decreases computing time in local planners, so that more trajectories can be evaluated and paths with much lower cost can be found. More importantly, the success rate, the fraction of runs in which the robot navigated to the destination across the different testing scenarios, is higher than 99.6% on average.
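The depth-based endpoint filtering can be sketched as follows. This is a simplified illustration under assumed pinhole-camera intrinsics `fx, fy, cx, cy`, with endpoints given in the camera frame (z pointing forward); the real DESS pipeline is more involved.

```python
import numpy as np

def filter_occluded_endpoints(endpoints, depth_image, fx, fy, cx, cy):
    """Keep only motion-primitive endpoints that lie in front of the
    depth measured along their line of sight, i.e. are not hidden
    behind an obstacle, so they never enter collision checking."""
    keep = []
    h, w = depth_image.shape
    for (x, y, z) in endpoints:
        u = int(round(fx * x / z + cx))   # project onto the image plane
        v = int(round(fy * y / z + cy))
        if 0 <= u < w and 0 <= v < h and z < depth_image[v, u]:
            keep.append((x, y, z))
    return keep

# Two candidate endpoints along the optical axis; the far one sits
# behind a wall of obstacles 5 m away and is discarded by a lookup.
depth = np.full((4, 4), 5.0)
kept = filter_occluded_endpoints([(0.0, 0.0, 1.0), (0.0, 0.0, 10.0)],
                                 depth, fx=1.0, fy=1.0, cx=2.0, cy=2.0)
```

The saving comes from replacing per-primitive collision checks with a single depth lookup per endpoint.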
Online clothing catalogs lack diversity in body shape and garment size. Brands commonly display their garments on models of one or two sizes, rarely including plus-size models. In this work, we propose a new method, SizeGAN, for generating images of garments on different-sized models. To change the garment and model size while maintaining a photorealistic image, we incorporate image alignment ideas from the medical imaging literature into the StyleGAN2-ADA architecture. Our method learns deformation fields at multiple resolutions and uses a spatial transformer to modify the garment and model size. We evaluate our approach along three dimensions: realism, garment faithfulness, and size. To our knowledge, SizeGAN is the first method to address this size under-representation problem in clothing modeling. We provide an analysis comparing SizeGAN to other plausible approaches and additionally provide the first clothing dataset with size labels. In a user study comparing SizeGAN and two recent virtual try-on methods, our method ranked first in each dimension and was vastly preferred for realism and garment faithfulness. In comparison to most previous work, which has focused on generating photorealistic images of garments, our work shows that it is possible to generate images that are both photorealistic and cover diverse garment sizes.
This paper investigates how hate speech varies in systematic ways according to the identities it targets. Across multiple hate speech datasets annotated for targeted identities, we find that classifiers trained on hate speech targeting specific identity groups struggle to generalize to other targeted identities. This provides empirical evidence for differences in hate speech by target identity; we then investigate which patterns structure this variation. We find that the targeted demographic category (e.g. gender/sexuality or race/ethnicity) appears to have a greater effect on the language of hate speech than does the relative social power of the targeted identity group. We also find that words associated with hate speech targeting specific identities often relate to stereotypes, histories of oppression, current social movements, and other social contexts specific to identities. These experiments suggest the importance of considering targeted identity, as well as the social contexts associated with these identities, in automated hate speech classification.
Explanatory interactive learning (XIL) collects user feedback on visual model explanations to implement a human-in-the-loop interactive learning scenario. Different user feedback types will have different impacts on user experience and on the cost associated with collecting feedback, since different feedback types involve different levels of image annotation. Although XIL has been used to improve classification performance in multiple domains, the impact of different user feedback types on model performance and explanation accuracy has not been well studied. To guide future XIL work, we compare the effectiveness of two different user feedback types for image classification tasks: (1) instructing an algorithm to ignore certain spurious image features, and (2) instructing an algorithm to focus on certain valid image features. We use explanations from a Gradient-weighted Class Activation Mapping (GradCAM) based XIL model to support both feedback types. We show that identifying and annotating spurious image features that a model finds salient yields superior classification and explanation accuracy compared with user feedback that tells the model to focus on valid image features.
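One simple way to quantify "explanation accuracy" in such a comparison is the overlap between a thresholded saliency heatmap and an annotated feature mask. This is a hypothetical proxy metric for illustration, not the paper's evaluation protocol; the threshold value is an assumption.

```python
import numpy as np

def explanation_iou(heatmap, feature_mask, thresh=0.5):
    """Intersection-over-union between the region a saliency heatmap
    highlights (values >= thresh) and an annotated feature mask: a
    crude proxy for how well an explanation matches valid features."""
    pred = heatmap >= thresh
    inter = np.logical_and(pred, feature_mask).sum()
    union = np.logical_or(pred, feature_mask).sum()
    return inter / union if union else 1.0

# A 2x2 heatmap that highlights exactly the annotated left column.
heat = np.array([[0.9, 0.1],
                 [0.6, 0.2]])
mask = np.array([[True, False],
                 [True, False]])
score = explanation_iou(heat, mask)
```

A heatmap from, e.g., GradCAM could be resized to the mask's resolution before applying such a measure.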